Barycentric Neural Networks and Length-Weighted Persistent Entropy Loss: A Green Geometric and Topological Framework for Function Approximation
Toscano-Duran, Victor; Gonzalez-Diaz, Rocio; Gutiérrez-Naranjo, Miguel A.
While artificial neural networks are known to be universal approximators for continuous functions, many modern approaches rely on overparameterized architectures with high computational cost. In this work, we introduce the Barycentric Neural Network (BNN): a compact shallow architecture that encodes both structure and parameters through a fixed set of base points and their associated barycentric coordinates. We show that the BNN enables the exact representation of continuous piecewise linear functions (CPLFs), ensuring strict continuity across segments. Since any continuous function on a compact domain can be uniformly approximated by CPLFs, the BNN emerges as a flexible and interpretable tool for function approximation. To enhance geometric fidelity in low-resource scenarios, such as settings with few base points available to build the BNN or a limited number of training epochs, we propose length-weighted persistent entropy (LWPE): a stable variant of persistent entropy. Our approach integrates the BNN with a loss function based on LWPE to optimize the base points that define the BNN, rather than its internal parameters. Experimental results show that our approach achieves superior and faster approximation performance compared to standard loss functions (MSE, RMSE, MAE, and LogCosh), offering a computationally sustainable alternative for function approximation.
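
To make the encoding concrete, here is a minimal NumPy sketch of how a fixed set of base points defines a CPLF through barycentric coordinates, together with the standard persistent entropy that LWPE stabilizes. The function names (bnn_eval, persistent_entropy) and the solver-free evaluation are ours, not the paper's, and the exact LWPE weighting should be taken from the paper itself.

```python
import numpy as np

def bnn_eval(base_x, base_y, x):
    """Evaluate the CPLF encoded by base points (base_x, base_y).

    On each segment [base_x[i], base_x[i+1]] the value is the barycentric
    combination lam * base_y[i] + (1 - lam) * base_y[i+1]; continuity across
    segments holds by construction, since lam -> 1 at the left knot and
    lam -> 0 at the right knot.
    """
    base_x = np.asarray(base_x, dtype=float)
    base_y = np.asarray(base_y, dtype=float)
    x = np.asarray(x, dtype=float)
    # Index of the segment containing each query point.
    i = np.clip(np.searchsorted(base_x, x, side="right") - 1, 0, len(base_x) - 2)
    x0, x1 = base_x[i], base_x[i + 1]
    lam = (x1 - x) / (x1 - x0)  # barycentric coordinate of x on its segment
    return lam * base_y[i] + (1.0 - lam) * base_y[i + 1]

def persistent_entropy(bar_lengths):
    """Standard persistent entropy of a set of persistence bar lengths.

    LWPE is a length-weighted, stabilized variant of this quantity; the
    precise re-weighting is defined in the paper.
    """
    p = np.asarray(bar_lengths, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p))
```

For example, with base points (0, 0), (0.5, 1), (1, 0), bnn_eval([0, 0.5, 1], [0, 1, 0], 0.25) returns 0.5, the midpoint of the first linear segment.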
Supplementary Materials for: Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State
Algorithm (one training iteration):
Input: network parameters θ; input data x; label y; time steps T; other hyperparameters.
Output: trained network parameters θ.
1. Run the feedback network for T time steps to approach the equilibrium state.
2. Calculate the output o, and the loss L based on o and y.
3. Compute the gradients of L with respect to θ by implicit differentiation on the equilibrium state.
4. Update θ based on the gradient-based optimizer.

We first prove Theorem 1; Theorem 2 is then proved similarly, and we omit the repetitive details here.
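
The following PyTorch sketch illustrates one such training iteration under the implicit-differentiation pattern. The dynamics f, the state dimension a_dim, the step counts T and K, the zero initial state, and the fixed-point backward solver are all our assumptions for illustration; the paper's actual solver, readout, and spiking dynamics may differ. The key point is that gradients are taken only at the equilibrium a* = f(a*, x), not through the T forward steps.

```python
import torch

def equilibrium_train_step(f, x, y, loss_fn, optimizer, a_dim, T=50, K=50):
    # (1) Relax the feedback dynamics for T time steps without building a graph.
    with torch.no_grad():
        a = torch.zeros(x.shape[0], a_dim)
        for _ in range(T):
            a = f(a, x)

    # (2) One differentiable step at the approximate equilibrium; here the
    #     output o is read directly from the equilibrium state (an assumption).
    a = a.detach().requires_grad_()
    a_star = f(a, x)
    loss = loss_fn(a_star, y)

    # (3) Implicit differentiation: solve u = dL/da* + (df/da)^T u by fixed-point
    #     iteration, using vector-Jacobian products instead of the full Jacobian.
    dl = torch.autograd.grad(loss, a_star, retain_graph=True)[0]
    u = dl
    for _ in range(K):
        vjp = torch.autograd.grad(a_star, a, grad_outputs=u, retain_graph=True)[0]
        u = dl + vjp

    # (4) Push u through f's parameters (dL/dθ = (df/dθ)^T u) and step θ.
    optimizer.zero_grad()
    a_star.backward(u)
    optimizer.step()
    return loss.item()
```

The fixed-point backward in step (3) converges when the state-to-state Jacobian of f is a contraction at equilibrium, which is the regime the equilibrium-state formulation assumes.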